Probabilistic PCA
Review for NeurIPS paper: Decision-Making with Auto-Encoding Variational Bayes
The proposed approach is to 1) train the generative model and inference network with each of the following objectives:
- the variational auto-encoder ELBO (VAE),
- the importance weighted auto-encoder bound (IWAE),
- reweighted wake-sleep, with the wake step used to train the inference network (RWS), and
- the chi-squared objective (X-VAE),
and 2) pick the model with the maximum evidence, estimated by the IWAE objective with a large number of particles.
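The model-selection step hinges on estimating the log evidence with a many-particle IWAE bound. Below is a minimal sketch of that estimator in NumPy; `sample_q`, `log_p_joint`, and `log_q` are hypothetical callables standing in for the trained inference network and generative model, not part of the reviewed submission.

```python
import numpy as np

def iwae_log_evidence(x, sample_q, log_p_joint, log_q, num_particles=5000):
    """Estimate log p(x) with the IWAE bound: log (1/K) sum_k p(x, z_k) / q(z_k | x).

    sample_q(x, K)    -> K latent samples z_k ~ q(z | x)          (hypothetical)
    log_p_joint(x, z) -> log p(x, z) under the generative model   (hypothetical)
    log_q(z, x)       -> log q(z | x) under the inference network (hypothetical)
    """
    z = sample_q(x, num_particles)
    log_w = np.array([log_p_joint(x, zk) - log_q(zk, x) for zk in z])
    # log-mean-exp of the importance weights, computed stably
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

The reviewed procedure would evaluate this estimate for the models trained with the VAE, IWAE, RWS, and X-VAE objectives and keep whichever scores highest.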
Distributed Probabilistic Learning for Camera Networks with Missing Data
Yoon, Sejong, Pavlovic, Vladimir
Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, either because of physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot explicitly deal with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA and missing-data PPCA, can be learned when the data is distributed across a network of sensors. We demonstrate the utility of this approach on the problem of distributed affine structure from motion. Our experiments suggest that the accuracy of the learned probabilistic structure and motion models rivals that of traditional centralized factorization methods while being able to handle challenging situations such as missing or noisy observations.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Virginia (0.04)
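To make the distributed-learning idea concrete, here is a minimal sketch of PPCA EM in which each camera node computes local E-step sufficient statistics and an aggregation step sums them before a shared M-step. It assumes fully observed local blocks, a pre-shared mean, and plain summation; the missing-data handling and the actual network protocol of the paper are not reproduced here.

```python
import numpy as np

def local_estep_stats(X, W, mu, sigma2):
    """E-step sufficient statistics for PPCA on one node's fully observed block X (n_local x d).

    mu is assumed to be a globally shared mean. Missing-data handling and the
    paper's consensus protocol are omitted in this illustration.
    """
    q = W.shape[1]
    M = W.T @ W + sigma2 * np.eye(q)            # q x q posterior precision factor
    Minv = np.linalg.inv(M)
    Xc = X - mu                                  # centred local data
    Ez = Xc @ W @ Minv                           # n x q, E[z_n]
    S_xz = Xc.T @ Ez                             # d x q, sum_n (x_n - mu) E[z_n]^T
    S_zz = len(X) * sigma2 * Minv + Ez.T @ Ez    # q x q, sum_n E[z_n z_n^T]
    S_xx = np.sum(Xc ** 2)                       # sum_n ||x_n - mu||^2
    return S_xz, S_zz, S_xx, len(X)

def global_mstep(stats, d):
    """M-step from pooled statistics (summation stands in for the network aggregation)."""
    S_xz = sum(s[0] for s in stats)
    S_zz = sum(s[1] for s in stats)
    S_xx = sum(s[2] for s in stats)
    n = sum(s[3] for s in stats)
    W_new = S_xz @ np.linalg.inv(S_zz)
    # valid because W_new is the exact maximiser given the pooled statistics
    sigma2_new = (S_xx - np.trace(W_new.T @ S_xz)) / (n * d)
    return W_new, sigma2_new
```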
A Dual Formulation for Probabilistic Principal Component Analysis
De Plaen, Henri, Suykens, Johan A. K.
In this paper, we characterize Probabilistic Principal Component Analysis in Hilbert spaces and demonstrate how the optimal solution admits a representation in dual space. This allows us to develop a generative framework for kernel methods. Furthermore, we show how it englobes Kernel Principal Component Analysis and illustrate its working on a toy and a real dataset.
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (3 more...)
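For context on the dual-space view, the following sketch is classical Kernel PCA computed purely from the Gram matrix, i.e. the dual representation that the paper's probabilistic framework is said to englobe. It is standard KPCA, not the paper's Fenchel-Young / energy-based construction.

```python
import numpy as np

def kernel_pca_dual(K, num_components):
    """Classical Kernel PCA computed entirely in the dual (Gram-matrix) space.

    K: n x n Gram matrix k(x_i, x_j).
    Returns the projections (scores) of the training points on the leading components.
    """
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # double centring in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)            # ascending order
    idx = np.argsort(eigvals)[::-1][:num_components]
    # normalise dual coefficients so each feature-space eigenvector has unit norm
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas
```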
Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA
Ghojogh, Benyamin, Ghodsi, Ali, Karray, Fakhri, Crowley, Mark
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method. It has two main steps which are linear reconstruction and linear embedding of points in the input space and embedding space, respectively. In this work, we look at the linear reconstruction step from a stochastic perspective where it is assumed that every data point is conditioned on its linear reconstruction weights as latent factors. The stochastic linear reconstruction of LLE is solved using expectation maximization. We show that there is a theoretical connection between three fundamental dimensionality reduction methods, i.e., LLE, factor analysis, and probabilistic Principal Component Analysis (PCA). The stochastic linear reconstruction of LLE is formulated similarly to factor analysis and probabilistic PCA. It is also explained why factor analysis and probabilistic PCA are linear and LLE is a nonlinear method. This work builds a bridge between two broad approaches to dimensionality reduction, i.e., the spectral and probabilistic algorithms.
- North America > Canada > Ontario > Toronto (0.14)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.05)
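The "linear reconstruction weights" that the paper treats as latent factors are, in standard LLE, obtained from a small constrained least-squares problem per point. A minimal sketch of that deterministic step is below; the `reg` ridge term is a common stability heuristic, not something taken from the paper.

```python
import numpy as np

def lle_reconstruction_weights(X, neighbors, reg=1e-3):
    """Deterministic linear-reconstruction step of LLE.

    For each point x_i, find weights over its neighbours that minimise
    ||x_i - sum_j w_ij x_j||^2 subject to sum_j w_ij = 1. The stochastic
    formulation discussed in the paper treats these weights as latent factors
    instead of solving for them in closed form.
    X: n x d data; neighbors: list of index arrays (k neighbours per point).
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nb = neighbors[i]
        Z = X[nb] - X[i]                               # shift neighbours to the origin
        G = Z @ Z.T                                    # local Gram matrix (k x k)
        G += reg * np.trace(G) * np.eye(len(nb))       # regularise for numerical stability
        w = np.linalg.solve(G, np.ones(len(nb)))
        W[i, nb] = w / w.sum()                         # enforce the sum-to-one constraint
    return W
```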
Generative Locally Linear Embedding
Ghojogh, Benyamin, Ghodsi, Ali, Karray, Fakhri, Crowley, Mark
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method. It has two main steps which are linear reconstruction and linear embedding of points in the input space and embedding space, respectively. In this work, we propose two novel generative versions of LLE, named Generative LLE (GLLE), whose linear reconstruction steps are stochastic rather than deterministic. GLLE assumes that every data point is caused by its linear reconstruction weights as latent factors. The proposed GLLE algorithms can generate various LLE embeddings stochastically while all the generated embeddings relate to the original LLE embedding. We propose two versions for stochastic linear reconstruction, one using expectation maximization and another with direct sampling from a distribution derived by optimization. The proposed GLLE methods are closely related to and inspired by variational inference, factor analysis, and probabilistic principal component analysis. Our simulations show that the proposed GLLE methods work effectively in unfolding and generating submanifolds of data.
- North America > Canada > Ontario > Toronto (0.14)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report (0.50)
- Overview (0.46)
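As a purely illustrative counterpart to the deterministic weights in the sketch above, the code below perturbs a given LLE weight matrix and re-normalises it, so repeated draws give a family of embeddings around the original one. This only conveys the flavour of stochastic reconstruction; GLLE samples from a distribution derived via expectation maximization or direct optimization, which this toy code does not implement.

```python
import numpy as np

def sample_stochastic_weights(W, noise_scale=0.05, rng=None):
    """Draw an illustrative stochastic variant of LLE reconstruction weights.

    Perturbs the nonzero entries of a deterministic weight matrix W with Gaussian
    noise and re-projects each row onto the sum-to-one constraint. Not the
    distribution derived in the GLLE paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    W_s = W.copy()
    mask = W != 0                                    # only perturb actual neighbour weights
    W_s[mask] += noise_scale * rng.standard_normal(mask.sum())
    row_sums = W_s.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)    # guard against degenerate rows
    return np.where(mask, W_s / safe, 0.0)
```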
Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey
Ghojogh, Benyamin, Ghodsi, Ali, Karray, Fakhri, Crowley, Mark
This is a tutorial and survey paper on factor analysis, probabilistic Principal Component Analysis (PCA), variational inference, and Variational Autoencoder (VAE). These methods, which are tightly related, are dimensionality reduction and generative models. They assume that every data point is generated from, or caused by, a low-dimensional latent factor. By learning the parameters of the distribution of the latent space, the corresponding low-dimensional factors are found for the sake of dimensionality reduction. Because of their stochastic and generative behaviour, these models can also be used to generate new data points in the data space. In this paper, we start with variational inference, where we derive the Evidence Lower Bound (ELBO) and Expectation Maximization (EM) for learning the parameters. Then, we introduce factor analysis, derive its joint and marginal distributions, and work out its EM steps. Probabilistic PCA is then explained, as a special case of factor analysis, and its closed-form solutions are derived. Finally, VAE is explained, where the encoder, decoder, and sampling from the latent space are introduced. Training VAE using both EM and backpropagation is explained.
- North America > Canada > Ontario > Toronto (0.14)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Asia > Middle East > Jordan (0.04)
- (2 more...)
- Research Report (0.50)
- Instructional Material > Course Syllabus & Notes (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Principal Component Analysis (0.61)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
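The closed-form PPCA solution the tutorial refers to is the classical Tipping-Bishop maximum-likelihood result; a compact NumPy version is sketched below for reference (eigendecomposition of the sample covariance, with the noise variance set to the mean of the discarded eigenvalues).

```python
import numpy as np

def ppca_closed_form(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop), as surveyed in the tutorial.

    Model: x = W z + mu + eps, with z ~ N(0, I_q) and eps ~ N(0, sigma2 * I_d).
    Requires q < d. W_ML is recovered up to an arbitrary rotation of the latent space.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)            # d x d sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)               # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    sigma2 = eigvals[q:].mean()                        # mean of the d - q discarded eigenvalues
    W = eigvecs[:, :q] @ np.diag(np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0)))
    return W, mu, sigma2
```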
Self-Paced Probabilistic Principal Component Analysis for Data with Outliers
Zhao, Bowen, Xiao, Xi, Zhang, Wanpeng, Zhang, Bin, Xia, Shutao
Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and feature extraction in data analysis. There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA). However, standard PCA and PPCA are not robust, as they are sensitive to outliers. To alleviate this problem, this paper introduces the Self-Paced Learning mechanism into PPCA, and proposes a novel method called Self-Paced Probabilistic Principal Component Analysis (SP-PPCA). Furthermore, we design the corresponding optimization algorithm based on the alternating search strategy and the expectation-maximization algorithm. SP-PPCA looks for optimal projection vectors and filters out outliers iteratively. Experiments on both synthetic problems and real-world datasets clearly demonstrate that SP-PPCA is able to reduce or eliminate the impact of outliers.
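To illustrate the self-paced mechanism described in the abstract, the sketch below implements the classic hard self-paced selection step: samples whose current loss exceeds an age parameter are dropped from the next model refit, and the parameter is grown between rounds. The exact regulariser, loss, and schedule used by SP-PPCA are assumptions here, not taken from the paper.

```python
import numpy as np

def self_paced_weights(neg_log_lik, threshold):
    """One self-paced selection step with the classic hard regulariser.

    Samples whose current loss (e.g. negative log-likelihood under the current
    PPCA fit) is below the age parameter `threshold` get weight 1, the rest 0.
    """
    return (neg_log_lik < threshold).astype(float)

# Outline of the alternating scheme (model refit omitted):
#   v = self_paced_weights(losses, threshold)   # filter out likely outliers
#   refit PPCA on the samples with v == 1 (e.g. a weighted EM step)
#   threshold *= growth_factor                  # admit harder samples next round
```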